Search for: All records
Total Resources: 4
- Author / Contributor
- Filter by Author / Creator
  - He, He (4)
  - Joshi, Nitish (4)
  - Chen, Angelica (2)
  - Ma, Johnny (2)
  - Nangia, Nikita (2)
  - Padmakumar, Vishakh (2)
  - Pang, Richard Yuanzhe (2)
  - Parrish, Alicia (2)
  - Phang, Jason (2)
  - Bowman, Samuel (1)
  - Bowman, Samuel R. (1)
  - Chen, Danqi (1)
  - Feng, Shi (1)
  - Friedman, Dan (1)
  - Puli, Aahlad Manas (1)
  - Ranganath, Rajesh (1)
  - Si, Chenglei (1)
  - Thompson, Jana (1)
  - Thompson, Jana. (1)
  - Wald, Yoav (1)
- Si, Chenglei; Friedman, Dan; Joshi, Nitish; Feng, Shi; Chen, Danqi; He, He (Association for Computational Linguistics)
  In-context learning (ICL) is an important paradigm for adapting large language models (LLMs) to new tasks, but the generalization behavior of ICL remains poorly understood. We investigate the inductive biases of ICL from the perspective of feature bias: which feature ICL is more likely to use given a set of underspecified demonstrations in which two features are equally predictive of the labels. First, we characterize the feature biases of GPT-3 models by constructing underspecified demonstrations from a range of NLP datasets and feature combinations. We find that LLMs exhibit clear feature biases, for example a strong bias to predict labels according to sentiment rather than shallow lexical features like punctuation. Second, we evaluate the effect of different interventions that are designed to impose an inductive bias in favor of a particular feature, such as adding a natural language instruction or using semantically relevant label words. We find that, while many interventions can influence the learner to prefer a particular feature, it can be difficult to overcome strong prior biases. Overall, our results provide a broader picture of the types of features that ICL may be more likely to exploit and how to impose inductive biases that are better aligned with the intended task.
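To make the setup in the abstract above concrete, here is a minimal sketch, not the authors' code or data: it constructs one "underspecified" prompt in which two features (sentiment and trailing punctuation) are equally predictive of the demonstration labels, plus a probe input that decouples them so the model's prediction reveals which feature it relies on. The example texts, labels, and instruction string are invented for illustration.

```python
# Illustrative sketch of an underspecified in-context prompt.
# In every demonstration, label "A" co-occurs with positive sentiment AND "!",
# and label "B" with negative sentiment AND ".", so either feature alone
# explains the labels.
demonstrations = [
    ("The movie was an absolute delight!", "A"),
    ("A tedious, joyless slog through cliches.", "B"),
    ("Brilliant performances and a gripping story!", "A"),
    ("The plot made no sense and the pacing dragged.", "B"),
]

# Disambiguating probe: positive sentiment but "." punctuation.
# Predicting "A" suggests reliance on sentiment; "B" suggests punctuation.
test_input = "One of the most charming films I have seen this year."

def build_prompt(demos, query, instruction=""):
    """Format demonstrations and a query as a single ICL prompt string."""
    lines = [instruction] if instruction else []
    for text, label in demos:
        lines.append(f"Input: {text}\nLabel: {label}")
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)

# The instruction is one of the intervention types mentioned in the abstract
# (a natural language instruction favoring the intended feature); the exact
# wording here is hypothetical.
prompt = build_prompt(demonstrations, test_input,
                      instruction="Classify each input by its sentiment.")
print(prompt)
```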
- Bowman, Samuel R.; Chen, Angelica; He, He; Joshi, Nitish; Ma, Johnny; Nangia, Nikita; Padmakumar, Vishakh; Pang, Richard Yuanzhe; Parrish, Alicia; Phang, Jason; et al. (NAACL 2022)
  To enable building and testing models on long-document comprehension, we introduce QuALITY, a multiple-choice QA dataset with context passages in English that have an average length of about 5,000 tokens, much longer than typical current models can process. Unlike in prior work with passages, our questions are written and validated by contributors who have read the entire passage, rather than relying on summaries or excerpts. In addition, only half of the questions are answerable by annotators working under tight time constraints, indicating that skimming and simple search are not enough to consistently perform well. Our baseline models perform poorly on this task (55.4%) and significantly lag behind human performance (93.5%).
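For readers unfamiliar with the task format, the sketch below shows a QuALITY-style long-document multiple-choice item and a simple accuracy score. The field names ("article", "question", "options", "gold_label") and the example record are assumptions made for illustration, not a guaranteed match to the released dataset files; only the 55.4% and 93.5% figures come from the abstract above.

```python
# Hedged sketch of a long-document multiple-choice QA record and scoring.
# Field names and the record contents are assumptions for illustration.
from typing import Dict, List

example_item: Dict = {
    "article": "<~5,000-token context passage goes here>",
    "question": "Why does the narrator return to the station?",
    "options": [
        "To retrieve a forgotten letter",
        "To meet the inspector",
        "To catch the last train",
        "To avoid the storm",
    ],
    "gold_label": 2,  # assumed 1-indexed position of the correct option
}

def accuracy(predictions: List[int], items: List[Dict]) -> float:
    """Fraction of items where the predicted option matches the gold label."""
    correct = sum(p == item["gold_label"] for p, item in zip(predictions, items))
    return correct / len(items)

# With four options, random guessing scores about 25%; the abstract reports
# baselines at 55.4% versus 93.5% for humans.
print(accuracy([2], [example_item]))  # 1.0 for this single illustrative item
```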
- Pang, Richard Yuanzhe; Parrish, Alicia; Joshi, Nitish; Nangia, Nikita; Phang, Jason; Chen, Angelica; Padmakumar, Vishakh; Ma, Johnny; Thompson, Jana; He, He; et al. (Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies)